September 21, 2012
The Chronicle of Higher Education

NEWS: RESEARCH

New Formula Beats Citation Index and Can Predict Success, 3 Scientists Say
By Paul Basken

More on research at chronicle.com/percolator

First there was the impact factor. Then came the h-index. Now, for those who believe that scientific prowess can be measured by statistical metrics, there's the Acuna-Allesina-Körding formula. The formula, outlined last week in the journal Nature, is intended to improve upon the h-index, a tally of a researcher's leading publications and citations, by adding a few more numerical measures of a scientist's publishing history to allow for predictions of future success.

The idea, said the paper's senior author, Konrad P. Körding, an associate professor of medicine and rehabilitation/physiology at Northwestern University, is to help universities and grant-writing agencies "support someone who will have high impact in the future."

Mr. Körding readily admitted that his method, tweaking the h-index by adding numbers like years of publication and number of distinct journals, cannot be perfect and shouldn't be a substitute for thoughtful human analysis of a researcher's past writings and future goals. But even careful subjective reviews have their limits, especially in the real world of deadline pressures and global competition, Mr. Körding said. Both ways of evaluating people, he said, "have advantages and disadvantages."

Despite that caution, the idea put forth by Mr. Körding, along with Daniel E. Acuna, a postdoctoral student in his laboratory, and Stefano Allesina, an assistant professor of ecology and evolution at the University of Chicago, is already generating some of the same divisions that have surrounded the impact factor and the h-index.

"It's really disturbing," said Robert H. Austin, a professor of physics at Princeton University, referring to the trend in which he now sees hiring panels requiring candidates to state their h-index, and applicants sometimes offering it at the top of their résumés.

The focus on the h-index, said John M. Drake, an associate professor of ecology at the University of Georgia, is leading researchers to choose popular and established topics that are likely to win citations from other authors. That, he said, "would seem to be the opposite of creativity."
History of Attempts

The forerunner of the h-index, the impact factor, was devised in the 1950s by Eugene Garfield, a librarian pursuing a doctorate in structural linguistics. His simple formula, dividing the number of citations a scientific journal received in the two previous years by the number of articles it published, sparked a revolution in establishing the reputations of the journals and the scientists who write for them. It also spawned resentment, with scientists complaining of the biases it creates and the gamesmanship it encourages. Among the more blatant cases of alleged abuse, journals eager to raise their impact factors have suggested that prospective writers mention some of their previously published articles.

Then came the h-index, created in 2005 by Jorge E. Hirsch, a professor of physics at the University of California at San Diego. It's also a simple measure, but aimed more at ranking the researcher than the journals. It's the number of a researcher's published papers that have been cited at least as many times as that number itself: an index of h means a scholar has published h papers that have been cited in other papers at least h times.

While it quickly gained popularity, the h-index suffered from the sense that it mostly confirmed past success. That's because those with the largest h-index were readily seen as renowned scientists late in their careers. Mr. Hirsch recognized the problem and wrote a follow-up analysis in 2007 in which he tested the ability of several other variants to provide greater predictions of a scientist's future career prospects. His conclusion, published in PNAS, the Proceedings of the National Academy of Sciences, was that none did.

Mr. Körding and his team are now suggesting otherwise. Their proposal adds to the h-index by including a scientist's total number of articles, the number of years since the first one was published, the number of journals they've appeared in, and the number of top journals. The new formula, Mr. Körding said, has proved more than twice as accurate as the h-index for predicting the future success of researchers in the life sciences.

He was driven to pursue the formula, he said, out of a basic curiosity about science and how it works. "I'm a scientist," he said, "so as a scientist I can't avoid basically asking myself about the future, about my career, about the career of friends, about the careers of people that I know, and which factors drive good science." He also acknowledged the basic angst among scientists wondering if their work is ultimately either useful or worthless to society.

Value Questioned

Mr. Hirsch isn't impressed. After reviewing the Acuna-Allesina-Körding paper, he said the factors added to his h-index appeared to have little meaningful effect. He suggested the additional factors had been devised by optimizing the coefficients for a particular set of authors covered by the paper. He said the predictive powers would not hold up for a wider set of test cases. "I would expect that dropping the h-index from the formula would have a major effect, and dropping any of the other criteria a minor effect," Mr. Hirsch told The Chronicle. He also chided the authors for their calculations of the h-index for several famous scientists, Albert Einstein, Charles Darwin, and Richard Feynman, saying the numbers given in the Acuna-Allesina-Körding paper were "completely wrong, not even remotely close." Mr. Körding rejected the criticism, saying he relied on the figures publicly available at Google Scholar.

Another critic of the overall use of such statistical measurements, Anurag A. Agrawal, a professor of ecology and evolutionary biology at Cornell University, said he is gradually realizing that metrics like the impact factor and h-index will soon grow obsolete. Eventually, Mr. Agrawal said, technology will allow better direct measurements of the value of each piece of published research. It won't be so important to estimate a researcher's value in five or 10 years, when data can show almost immediately how many people have actually downloaded and used a published journal article, he said.

Mr. Körding also believes the future will improve the degree to which statistical measures complement subjective human evaluations. He acknowledged that both impact factor and h-index can be gamed and can create harmful and perverse incentives. But he said similar problems faced Google, and the reason the company's search engine has survived and prevailed is that its engineers are constantly adjusting it when they learn how people seek to manipulate its results. "Basically, if you have strong statisticians who build metrics," he said, "these guys will always be ahead of those people that game it."

As for the effects of such improved metrics, Mr. Körding anticipates a world in which universities and grant-writing agencies work more efficiently, to the betterment of science. Because better metrics may help all sides recognize young talent, it might make the rich universities richer and the poor universities poorer, he said. And rather than punish creative adventurers who dare to tread into areas not yet recognized by other scientists, better systems of talent evaluation might mean a person with the talent to lead an intellectual revolution might actually get the money needed to do it, he said. "In a system where we have better metrics," Mr. Körding said, "the people who could make most out of the money are more likely to get it."

Scientists May Be Responsible for Spin

In recent years, newspapers have been full of articles touting the health benefits of coffee: It cuts the risk of heart attack, stroke, and various kinds of cancers. Yet some studies have also raised warnings, saying coffee can encourage overeating and, yes, even increase heart-attack risk. Similar uncertainties, at least as reflected in newspaper articles and TV news reports, surround red wine, aspirin, estrogen supplements, prostate screening, and many other foods, pharmaceuticals, and medical procedures.

What's going on? Are science reporters unable to make sense out of medical research? Are they overemphasizing minor fluctuations in study findings just to attract readers? The answer, according to an analysis published last week in the journal PLoS Medicine, is neither. Instead, according to one of its authors, Isabelle Boutron, an assistant professor of epidemiology at University of Paris V: René Descartes, the tendency to emphasize medical benefits over risks is most often attributable to the presentations by scientists in their own journal articles.

The study was based on an examination of 70 articles that appeared in scientific journals from December 2009 to March 2010, reporting the results of randomized, controlled trials conducted at a range of universities in the United States and abroad. The study also included a review of the press releases that accompanied those articles, and the subsequent news coverage. Of the 41 news stories that could be associated with those studies, a majority of them, 21, contained "spin," which Dr. Boutron and her team defined as a specific reporting strategy, intentional or unintentional, that emphasized the beneficial effect of the treatment being tested. In most cases, though, that spin could be traced back to the journal article itself, Dr. Boutron concluded. Her analysis found spin in 40 percent of the study abstracts published in the scientific journals, and in 47 percent of the accompanying news releases.

"These findings show that spin in press releases and news reports is related to the presence of spin in the abstract of peer-reviewed reports," Dr. Boutron wrote. P.B.

Konrad Körding (left) and Daniel Acuna tackled the h-index because of basic angst over whether scientists' work is useful.

BEN WALKER

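The two measures at the center of this story are simple enough to state as code. The sketch below is an illustration added for readers, not something from the Nature paper or the article itself; the function names and the example citation counts are invented for the illustration.

```python
def impact_factor(citations_past_two_years, articles_past_two_years):
    """Garfield's impact factor: citations a journal received in the two
    previous years divided by the number of articles it published then."""
    return citations_past_two_years / articles_past_two_years

def h_index(citation_counts):
    """Hirsch's h-index: the largest h such that the researcher has
    published h papers each cited at least h times."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# A scholar whose papers have been cited [10, 8, 5, 4, 3] times has an
# h-index of 4: four papers have each been cited at least four times.
```

The Acuna-Allesina-Körding formula starts from this h-index and, per the article, folds in total articles, career length, number of distinct journals, and number of top journals.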